
AI and Trust

Communications of the ACM

This is a discussion about artificial intelligence (AI), trust, power, and integrity. There are two kinds of trust--interpersonal and social--and we regularly confuse them. What matters here is social trust, which is about reliability and predictability in society. Our confusion will increase with AI, and the corporations controlling AI will use that confusion to take advantage of us. This is a security problem. This is a confidentiality problem. But it is much more an integrity problem. And that integrity is going to be the primary security challenge for AI systems of the future. It's also a regulatory problem, and it is government's role to enable social trust, which means incentivizing trustworthy AI. Okay, so let's break that down. Trust is a complicated concept, and the word is overloaded with many different meanings. When we say we trust a friend, it is less about their specific actions and more about them as a person.


EU politicians back new rules on AI ahead of landmark vote

Al Jazeera

European politicians in two key committees have approved new rules to regulate artificial intelligence (AI) ahead of a landmark vote that could pave the way for the world's first legislation on the technology. On Tuesday, two committees in the European Parliament – on civil liberties and consumer protection – overwhelmingly endorsed the provisional legislation to ensure that AI complies with the protection of "fundamental rights". A vote in the legislative assembly is scheduled for April. The AI Act will aim to set guardrails on a technology being used in several industries, ranging from banking and cars to electronic products and airlines, as well as for security and police purposes. "At the same time, it aims to boost innovation and establish Europe as a leader in the AI field," the parliament said in a statement.


Deepfakes: Faces Created by AI Now Look More Real Than Genuine Photos

#artificialintelligence

Even if you think you are good at analyzing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer-generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don't exist. A few years ago, for example, a fake LinkedIn profile with a computer-generated profile picture made the news because it successfully connected with US officials and other influential individuals on the networking platform. Counter-intelligence experts even say that spies routinely create phantom profiles with such pictures to home in on foreign targets over social media. These deepfakes are becoming widespread in everyday culture, which means people should be more aware of how they're being used in marketing, advertising, and social media.



Social Trust: A Major Challenge for the Future of Autonomous Systems

Lahijanian, Morteza (University of Oxford) | Kwiatkowska, Marta (University of Oxford)

AAAI Conferences

The immense technological advancements of the past decade have enabled robots to enjoy high levels of autonomy, paving their way into our society. The recent catastrophic accidents involving autonomous systems (e.g., the fatal Tesla car accident), however, show that engineering progress alone is not enough to guarantee a safe and productive partnership between a human and a robot. In this paper we argue that we also need to advance our understanding of the role of social trust within human-robot relationships, and formulate a theory for expressing and reasoning about trust in the context of decisions affecting collaboration or competition between humans and robots. Therefore, we call for cross-disciplinary collaborations to study the formalization of social trust in the context of human-robot relationships. We lay the groundwork for such a study in this paper.


Building a Cognitive Model of Social Trust Within ACT-R

Kennedy, William G. (George Mason University) | Krueger, Frank (George Mason University)

AAAI Conferences

This paper describes work underway at the Krasnow Institute for Advanced Study on the topic of modeling social trust. We have built and are testing an ACT-R model intended to replicate human participants building and maintaining social trust in an economic investment game. We already have behavioral and fMRI imaging data for human subjects, and we expect to generate comparable data by having an ACT-R model read the same inputs the humans did and decide whether or not to trust its partner.
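The economic investment game mentioned above is a standard paradigm for measuring trust: an investor can send part of an endowment to a trustee, the transfer is multiplied in transit, and the trustee decides how much to return. The sketch below is a rough illustration of that payoff structure, not the authors' actual implementation; the endowment of 10 units and the multiplier of 3 are assumptions, since the abstract does not state the parameters used.

```python
def trust_game(endowment, invested, multiplier, returned_fraction):
    """One-shot investment (trust) game payoffs.

    The investor keeps (endowment - invested); the invested amount is
    multiplied by `multiplier` on its way to the trustee, who returns
    a fraction `returned_fraction` of what they received.
    """
    assert 0 <= invested <= endowment
    transferred = invested * multiplier          # investment grows in transit
    returned = transferred * returned_fraction   # trustee sends some back
    investor_payoff = endowment - invested + returned
    trustee_payoff = transferred - returned
    return investor_payoff, trustee_payoff

# Example (assumed parameters): invest half of a 10-unit endowment with a
# 3x multiplier; the trustee returns half of the 15 units received.
investor, trustee = trust_game(endowment=10, invested=5,
                               multiplier=3, returned_fraction=0.5)
# → investor gets 12.5, trustee gets 7.5
```

Higher investments signal trust, and higher returned fractions signal trustworthiness; a model such as the ACT-R one described above must decide how much to invest given its partner's past behavior.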